In "Rethinking AI for Good Governance," Helen Margetts, a professor at the University of Oxford and Programme Director for Public Policy at The Alan Turing Institute, London, proposes a general framework for the integration of artificial intelligence (AI) in governments as a justifiable means of improving current and subsequent federal administrations. Providing context for the relationship between federal governments and computational technology, Margetts initially discusses the origin of computers. She briefly explains its development as a fundamental weapon used during World War II by the Allied Powers against the Axis Powers (Margetts, 2022). During this time, computational technology was primarily under the expertise of the government; however, after World War II, computers became progressively outsourced to the public, leading to the eventual decline of unique technological proficiency in governments and the growth of technological developments driven by private sectors in countries (Margetts, 2022). Inevitably, within the past decade, governments realized the potential utility of computational technology (Margetts, 2022). Particularly, governments gained a newfound interest in artificial intelligence (Margetts, 2022). In the UK, "government announcements that mentioned data science and artificial intelligence rose from fifteen in 2015 to 272 in 2018" (Margetts, 2022, p. 361). In the United States, around 45 percent of federal agencies within the federal government experimented with AI and machine learning by the end of 2020 (Margetts, 2022). Although the use of artificial intelligence in governments is in an infantile stage, its integration seems promising (Margetts, 2022).
After discussing the emergent presence of computational technology within governments, Margetts explains how artificial intelligence should be developed as it becomes more integrated into government. One of the primary ways artificial intelligence can be incorporated into government is detection, or "taking in information" (Margetts, 2022, p. 361). Before policy-making, governments need data and knowledge; therefore, they must be able to "detect (and then minimize) unwanted behavior by firms or individual citizens" (Margetts, 2022, p. 361). Governments can use machine learning, an application of artificial intelligence, to classify activity across large data sources such as the Internet (Margetts, 2022). Machine learning can detect "online harms such as hate speech, financial scams, problem gambling, bullying, misleading advertising, or extreme threats and cyberattacks" (Margetts, 2022, p. 362). By applying counter-adversarial technology, government agencies can build machine learning classifiers trained to automatically track violative content, reducing the chances of offenders working around and evading detection (Margetts, 2022). To train such classifiers to detect unwanted behavior, agencies can use generative adversarial networks (Margetts, 2022). In a generative adversarial network, two neural networks, machine learning models loosely modeled on human thinking, are pitted against each other (Margetts, 2022). One network generates new content based on a given data sample, such as content drawn from the Internet, and the other attempts to determine whether a given item comes from the real data sample or was generated (Margetts, 2022). As the two networks continue this contest, both learn and improve, making machine learning a viable way of helping government agencies detect and track information (Margetts, 2022).
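To make the adversarial setup concrete, the following is a minimal sketch of a generative adversarial network, assuming PyTorch and using invented numeric feature vectors in place of real online content; it illustrates the generator-versus-discriminator loop rather than any configuration an agency actually uses.

```python
# Minimal generative adversarial network (GAN) sketch in PyTorch.
# Hypothetical setup: the "real" data are simple numeric feature vectors
# standing in for content features; a real system would use far richer
# representations of online content.
import torch
import torch.nn as nn

torch.manual_seed(0)

FEATURES = 8      # size of each (toy) content feature vector
NOISE_DIM = 4     # size of the generator's random input

# Generator: produces synthetic "content" vectors from random noise.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 16), nn.ReLU(),
    nn.Linear(16, FEATURES),
)

# Discriminator: estimates the probability that a vector came from real data.
discriminator = nn.Sequential(
    nn.Linear(FEATURES, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # Toy "real" sample: vectors drawn from a fixed distribution.
    real = torch.randn(32, FEATURES) * 0.5 + 2.0
    noise = torch.randn(32, NOISE_DIM)
    fake = generator(noise)

    # 1) Train the discriminator to tell real vectors from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

As the loop runs, the generator's output becomes harder to distinguish from the real sample, which is the property that makes the trained discriminator (or a classifier trained alongside it) more robust to evasion.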
Besides detection, artificial intelligence can vastly improve predictions made by the government (Margetts, 2022). Without access to large data samples, governments cannot forecast accurately; however, with the help of artificial data generated by machine learning, particularly neural networks and decision trees, agencies could make more accurate predictions (Margetts, 2022). A decision tree is a model that maps out the possible outcomes of a decision; with it, policymakers can determine the best possible solution to a problem (Margetts, 2022). These types of machine learning models "can greatly enhance the prioritization of sites for inspection or monitoring, from water pipes, factories, and restaurants to schools and hospitals, where early signs of failing organizations or worrying social trends may be picked up in transactional data" (Margetts, 2022, p. 363). Government agencies can also use artificial intelligence to predict aggregate demand, "for example, in schools, prisons, or children's care facilities" (Margetts, 2022, p. 363). Machine learning can surface patterns in data that reveal future needs, helping the government plan resources and prioritize significant matters (Margetts, 2022). For instance, machine learning was used during the COVID-19 pandemic from 2020 to 2021 to predict aggregate demand for resources "such as ventilators, nurses, and drug treatments toward those areas likely to be most affected" (Margetts, 2022, p. 363). Thus, with the implementation of artificial intelligence, governments can drastically improve predictions.
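As an illustration of the kind of model involved, the sketch below trains a small decision tree with scikit-learn on invented inspection records; the feature names, figures, and labels are hypothetical stand-ins for the transactional data an agency would actually use to prioritize sites for inspection.

```python
# Minimal decision-tree sketch with scikit-learn (invented data).
# Each row might describe an inspection site (asset age in years, past
# complaints, days since the last inspection); the label marks whether a
# problem was later found there.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [
    [25, 4, 400],   # old asset, several complaints, long gap -> problem
    [30, 6, 500],
    [ 3, 0,  60],   # new asset, no complaints, recent check -> no problem
    [ 5, 1,  90],
    [20, 3, 300],
    [ 2, 0,  30],
]
y = [1, 1, 0, 0, 1, 0]  # 1 = problem found on inspection, 0 = none

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The learned rules can be printed and read by policymakers.
print(export_text(model, feature_names=["asset_age", "complaints", "days_since_check"]))

# Score a new, unseen site to help prioritize inspections.
print(model.predict_proba([[18, 2, 250]]))
```

Because the tree's branching rules can be printed and inspected, this family of models is often easier for non-specialist decision-makers to interrogate than a neural network.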
The third way artificial intelligence can help governments design policies is through data-driven simulation (Margetts, 2022). Before the emergence of artificial intelligence, the primary way of testing policies was through field experiments: "randomized trials in which the intervention is applied to a 'treatment group' and the results are compared with a 'control group'" (Margetts, 2022, p. 364). However, these experiments required significant amounts of time and money, were subject to a variety of errors, and were inhibited by ethical concerns (Margetts, 2022). With artificial intelligence and agent computing, governments do not need to rely on field experiments (Margetts, 2022). In an agent computing model, individual agents are generated, each with a specific state and behavior, and data is produced by observing how the agents act and interact (Margetts, 2022). Agent computing artificially models human behavior and can be used in place of real-world trials (Margetts, 2022). Policymakers can evaluate proposals with agent computing across a variety of situations (Margetts, 2022). In the case of law enforcement, agencies are using agent computing to model artificial police officers based on data from the activities of real officers (Margetts, 2022). This modeling is used to test out "different levels of police resourcing and measure the potential effects on delivery of criminal justice" (Margetts, 2022, p. 365). Agent computing models can also simulate the allocation of resources, such as trained professionals, in public sector areas such as education and health care (Margetts, 2022). The United Nations is currently using agent computing to help developing countries determine which public sector policies to prioritize (Margetts, 2022). At its current pace, further development of artificial intelligence could potentially replace real-world testing with simulation entirely, saving governments considerable costs.
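To show the basic shape of such a model, the following is a minimal agent-based simulation sketch in plain Python, using entirely invented numbers: each "officer" agent holds a queue of cases (its state) and resolves a fixed number per day (its behavior), so running the model at different staffing levels gives a rough, illustrative comparison of how resourcing affects the backlog of unresolved cases.

```python
# Minimal agent-based simulation sketch (pure Python, invented numbers).
import random

random.seed(0)

class Officer:
    def __init__(self, cases_per_day):
        self.queue = []                    # state: open cases assigned to this agent
        self.cases_per_day = cases_per_day

    def work(self):                        # behavior: resolve a limited number of cases
        resolved = min(self.cases_per_day, len(self.queue))
        del self.queue[:resolved]

def simulate(n_officers, days=100, new_cases_per_day=40, cases_per_day=3):
    officers = [Officer(cases_per_day) for _ in range(n_officers)]
    for _ in range(days):
        # New cases arrive each day and are assigned to random officers.
        for _ in range(new_cases_per_day):
            random.choice(officers).queue.append("case")
        for officer in officers:
            officer.work()
    return sum(len(o.queue) for o in officers)   # total backlog at the end

for staffing in (10, 14, 18):
    print(staffing, "officers -> backlog:", simulate(staffing))
```

Even this toy version shows the logic Margetts describes: the policy lever (staffing) is varied in the model rather than in the field, and the simulated outcome (the backlog) stands in for delivery of the service.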
Margetts (2022) also proposes specific implementations of artificial intelligence in government. The first proposal is developing "new generalized models of policy-making" (p. 367). Traditionally, governments do not use transactional data to predict and simulate scenarios, resulting in a lack of information available for adequate policy-making (Margetts, 2022). Currently, data in government "are compressed within files, available for checking individual pieces of information, but generating no usable data for analytics" (Margetts, 2022, p. 367). Because so much governmental data is not usable, there are substantial delays in policy-making, especially in situations with strict timelines (Margetts, 2022). For example, many countries lacked data during the COVID-19 pandemic, so interventions could not be produced in a timely manner (Margetts, 2022). Data was not generated in real time; "in the United Kingdom, for example, it turned out that data for deaths were available only several weeks after the death had occurred" (Margetts, 2022, p. 367). When data finally became usable, models of public health, health care, education, and the economy were simulated separately (Margetts, 2022). There was a lack of communication between sectors, so legislative proposals targeted only single sectors such as "economic recovery or the health crisis" (Margetts, 2022, p. 367). Advanced machine learning could have created solutions that targeted multiple sectors at the same time, resulting in substantially more progress (Margetts, 2022). If implemented correctly, artificial intelligence could support generalized policy-making and improve responses to future crises.
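As a toy illustration of what "usable data for analytics" might mean in practice, the sketch below uses pandas to aggregate hypothetical record-level extracts from two sectors and join them by date so that a single model could look across them; the column names and figures are invented for illustration only.

```python
# Minimal sketch of turning record-level extracts into analytics-ready,
# cross-sector data (hypothetical numbers, using pandas).
import pandas as pd

# Hypothetical record-level extracts from two separate systems.
health = pd.DataFrame({
    "date": ["2020-04-01", "2020-04-01", "2020-04-02"],
    "hospital_admissions": [120, 95, 140],
})
economy = pd.DataFrame({
    "date": ["2020-04-01", "2020-04-02"],
    "furlough_claims": [5000, 6200],
})

# Aggregate by day and join on date so one model can see both sectors at once.
daily_health = health.groupby("date", as_index=False)["hospital_admissions"].sum()
combined = daily_health.merge(economy, on="date")
print(combined)
```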
The second and more ambitious implementation of artificial intelligence presented by Margetts (2022) is using it to reform long-standing biased and unjust government practices in favor of equality and fairness. Machine learning can help with "identifying and reforming long-standing biases in resource allocation, decision-making, the administering of justice, and the delivery of services" (Margetts, 2022, p. 367). Current data collection and machine learning applications draw on data generated by the existing governmental system, so existing biases and inequalities are reproduced and made explicit (Margetts, 2022). For example, during the COVID-19 pandemic, machine learning detected "many structural inequalities in how citizens are treated-for example, in the delivery of health care to people from different ethnic groups-just as the mobilization around race has revealed systemic racism in police practice" (Margetts, 2022, pp. 367-368). Therefore, in the future, agencies should use artificial intelligence to create new models that "produce unbiased resource allocation methods and decision support systems for public professionals, helping to make government better, in every sense of the word, than ever before" (Margetts, 2022, p. 368).
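The first step of "identifying" such biases can be as simple as comparing outcome rates across groups in administrative records. The sketch below is a minimal, hypothetical example of that kind of disparity check; the group labels, records, and screening rule are invented for illustration, not drawn from Margetts.

```python
# Minimal sketch of a group-level disparity check (invented data).
from collections import defaultdict

# Hypothetical administrative records: (demographic group, received_service)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

totals = defaultdict(int)
received = defaultdict(int)
for group, got_service in records:
    totals[group] += 1
    received[group] += int(got_service)

# Share of each group that received the service.
rates = {g: received[g] / totals[g] for g in totals}
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

# A simple screening rule: ratios far below 1.0 flag a disparity worth auditing.
print("disparity ratio:", min(rates.values()) / max(rates.values()))
```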
Margetts' "Rethinking AI for Good Governance" proposes many applicable implementations of artificial intelligence in governments, fitting the progressive growth of technology in modern society. Although her applications are comprehensible and in touch with the current infantile stages of the relationship between artificial intelligence and federal governments, her propositions seem to reside in an idealistic framework. Margetts suggests that artificial intelligence can significantly improve the substantial amount of corruption committed within governments, but she does not take in the perspective that policymakers may use artificial intelligence to further commit ethical violations. Since a utopian society does not exist,using advanced forms of artificial intelligence in the government may instead further repel people's trust in federal administrations. As mentioned in this journal article, Margetts (2022) pointed out that there was explicit proof that the government used biases and discrimination when drafting legislature during the COVID-19 pandemic.Given the long history of inequality present within the government, using artificial intelligence would alarm the public. Gaining the trust of the people would be significantly difficult even for a government that does not use advanced technology. Privacy and security concerns may already prohibit using artificial intelligence, and further development despite privacy invasion may cause irredeemable ethical violations.
In addition to skepticism about whether government officials will use AI in good faith, growing computational literacy within government may lead to future legislation regulating the use of such technology in the private sector. If the government fully realizes the potential benefits of artificial intelligence, it may draft policies that limit how far private sector companies can use the technology, constraining the technology industry and free-market rights.
Finally, one complex issue that will follow from the progression of artificial intelligence in government is hiring computer engineers. Margetts (2022) proposes that governments develop this technological capacity themselves rather than depending on the private sector. Although technologies created within the private sector are far more advanced than those used in the public sector, she believes that the government should develop literacy in artificial intelligence independently of for-profit organizations to guard against potential corruptive practices (Margetts, 2022). While this claim sounds appealing, it may be difficult for the government to incentivize computer engineers to work for the state rather than private companies. If superior monetary incentives were offered, the government would need to accumulate sufficient funds to pay these employees. Advancing artificial intelligence would therefore indirectly cause an increase in taxes paid by the public or a redistribution of government spending across different sectors. Regarding a potential increase in taxes, the public may protest as they scrutinize the cost-benefit ratio of implementing artificial intelligence. A failure or hindrance in the development of such technology would ultimately result in the loss of taxpayer money, and governments would need to face the resulting repercussions and backlash from the public. Regarding the potential redistribution of government spending across public sectors, funds earmarked for certain programs may need to be withdrawn and reallocated to the sector responsible for developing artificial intelligence. For example, funding for the military, healthcare, education, and infrastructure may have to be reduced to increase the budget available for artificial intelligence. This may cause unforeseen challenges as other public sectors operate on reduced budgets in favor of developing a new one. Taking these factors into account, budgeting is one of the most significant obstacles to consider when implementing AI in governance.
Providing an in-depth account of how artificial intelligence could be applied in government agencies, Helen Margetts offers well-grounded suggestions for the further advancement of technology in government. However, alongside the numerous benefits of this endeavor, there are many confounding factors that may inhibit its success. Currently, the use of artificial intelligence is growing slowly within governments, and the outcomes of this ambitious proposal can only be revealed over time.
Margetts, H. (2022). Rethinking AI for Good Governance. Daedalus, 151(2), 360-371. https://www.jstor.org/stable/48662048